Symbol grounding problem

The symbol grounding problem is related to the problem of how words (symbols) get their meanings, and hence to the problem of what meaning itself really is. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful.

According to a widely held theory of cognition called "computationalism," cognition (i.e., thinking) is just a form of computation. But computation in turn is just formal symbol manipulation: symbols are manipulated according to rules that operate on the symbols' shapes, not their meanings. How, then, are those symbols (e.g., the words in our heads) connected to the things they refer to? The connection cannot run through the mediation of an external interpreter's head, because that would lead to an infinite regress, just as looking up the meanings of words in a (unilingual) dictionary of a language that one does not understand leads to an infinite regress.

The symbols in an autonomous hybrid symbolic+sensorimotor system would be grounded: such a system is a Turing-scale robot consisting of both a symbol system and a sensorimotor system that reliably connects its internal symbols to the external objects they refer to, so that it can interact with them Turing-indistinguishably from the way a person does. But whether its symbols would have meaning, rather than just grounding, is something that even the robotic Turing test, and hence cognitive science itself, cannot determine or explain.
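The dictionary regress can be made concrete with a minimal sketch in Python (the toy entries and the helper names, DEFINITIONS, next_defined, and chase, are all hypothetical): every word is defined only by other words, so following definitions never exits the symbol system.

```python
from typing import Dict, List, Optional

# Toy unilingual dictionary (hypothetical entries): every word is defined
# only in terms of other words.
DEFINITIONS: Dict[str, List[str]] = {
    "bachelor":  ["unmarried", "man"],
    "unmarried": ["not", "married"],
    "married":   ["joined", "to", "a", "spouse"],
    "spouse":    ["married", "partner"],
}

def next_defined(definition: List[str]) -> Optional[str]:
    """Return the first defining word that itself has an entry, if any."""
    return next((w for w in definition if w in DEFINITIONS), None)

def chase(word: str, steps: int = 5) -> None:
    """Follow definitions from word to word; in a closed symbol system
    the chase can only cycle, never bottom out in a non-symbolic object."""
    for _ in range(steps):
        definition = DEFINITIONS[word]
        print(f"{word} -> {' '.join(definition)}")
        word = next_defined(definition)
        if word is None:   # would only happen if a definition escaped
            return         # the dictionary; here it never does

chase("bachelor")
# bachelor -> unmarried man
# unmarried -> not married
# married -> joined to a spouse
# spouse -> married partner
# married -> joined to a spouse   <- the regress: symbols defined by symbols
```

Nothing in the trace ever connects a symbol to anything non-symbolic; grounding would have to supply exactly that exit.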
==Words and meanings==
We have known since Frege that the thing a word refers to (i.e., its referent) is not the same as the word's meaning. This is most clearly illustrated with the proper names of concrete individuals, but it is also true of names of kinds of things and of abstract properties: (1) "Tony Blair," (2) "the prime minister of the UK during the year 2004," and (3) "Cherie Blair's husband" all have the same referent, but not the same meaning.〔Although this article draws in places upon Frege's view of semantics, it is strongly anti-Fregean in stance: Frege was a fierce critic of psychological accounts that attempt to explain meaning in terms of mental states.〕
Some have suggested that the meaning of a (referring) word is the rule or set of features that one must use in order to successfully pick out its referent. In that respect, (2) and (3) come closer to wearing their meanings on their sleeves, because they explicitly state a rule for picking out their referents: "find whoever was prime minister of the UK during the year 2004," or "find whoever is Cherie's current husband." But that does not settle the matter, because there is still the problem of the meaning of the components of such a rule ("UK," "during," "current," "prime minister," "Cherie," "husband"), and of how to pick ''them'' out.
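As a minimal sketch of this point (the toy lookup tables and function names are hypothetical), descriptions (2) and (3) can be coded as two distinct procedures that nevertheless pick out one and the same referent:

```python
# Toy lookup tables (hypothetical data) standing in for however the world
# happens to be.
PRIME_MINISTERS = {2004: "Tony Blair"}       # year -> UK prime minister
SPOUSES = {"Cherie Blair": "Tony Blair"}     # person -> current husband

def rule_2() -> str:
    """'the prime minister of the UK during the year 2004'"""
    return PRIME_MINISTERS[2004]

def rule_3() -> str:
    """'Cherie Blair's husband'"""
    return SPOUSES["Cherie Blair"]

assert rule_2() == rule_3() == "Tony Blair"  # same referent,
assert rule_2.__doc__ != rule_3.__doc__      # different rules ("meanings")

# The regress described in the text shows up here too: the tables' keys and
# values are themselves just more symbols ("Tony Blair", 2004) whose
# referents the rules presuppose rather than pick out.
```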
Perhaps "Tony Blair" (or better still, just "Tony") does not have this recursive component problem, because it points straight to its referent, but how? If the meaning is the rule for picking out the referent, what is that rule, when we come down to non-decomposable components like proper names of individuals (or names of ''kinds'', as in "an unmarried man" is a "bachelor")?
It is probably unreasonable to expect us to know the rule for picking out the intended referents of our words—to know it explicitly, at least. Our brains do need to have the "know-how" to ''execute'' the rule, whatever it happens to be: they need to be able to actually pick out the intended referents of our words, such as "Tony Blair" or "bachelor." But ''we'' do not need to know consciously ''how'' our brains do that; we needn't know the rule. We can leave it to cognitive science and neuroscience to find out how our brains do it, and then explain the rule to us explicitly.
So if we take a word's meaning to be the means of picking out its referent, then meanings are in our brains. That is meaning in the ''narrow'' sense. If we use "meaning" in a ''wider'' sense, then we may want to say that meanings include both the referents themselves and the means of picking them out. So if a word (say, "Tony-Blair") is located inside an entity (e.g., oneself) that can use the word and pick out its referent, then the word's wide meaning consists of both the means that that entity uses to pick out its referent, and the referent itself: a wide causal nexus between (1) a head, (2) a word inside it, (3) an object outside it, and (4) whatever "processing" is required in order to successfully connect the inner word to the outer object.
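That four-part nexus can be pictured as a simple record type; the following is only an illustrative sketch of the enumeration above, with every name hypothetical:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class WideMeaning:
    """The four-part 'wide meaning' nexus (all field names hypothetical)."""
    head: str                         # (1) the entity housing the word
    word: str                         # (2) the inner symbol
    referent: Any                     # (3) the outer object
    processing: Callable[[str], Any]  # (4) the means of picking it out

# Narrow meaning would keep only what is inside the head, i.e. (1), (2),
# and (4); wide meaning adds the referent itself.
tony = WideMeaning(
    head="some speaker",
    word="Tony-Blair",
    referent=object(),              # a stand-in for the man himself
    processing=lambda w: object(),  # a stand-in for the brain's know-how
)
```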
But what if the "entity" in which a word is located is not a head but a piece of paper (or a computer screen)? What is its meaning then? Surely all the (referring) words on this screen, for example, have meanings, just as they have referents.
In the 19th century, the semiotician Charles Sanders Peirce proposed what some consider a similar model: according to his triadic sign model, meaning requires (1) an interpreter, (2) a sign or representamen, and (3) an object, and is (4) the virtual product of an endless regress and progress called semiosis.〔Peirce, Charles S., ''The Philosophy of Peirce: Selected Writings'', New York: AMS Press, 1978.〕 Some have interpreted Peirce as addressing the problems of grounding, feelings, and intentionality for the understanding of semiotic processes.〔T. L. Short, "Semeiosis and Intentionality," ''Transactions of the Charles S. Peirce Society'' Vol. 17, No. 3 (Summer 1981), pp. 197-223.〕 In recent years, Peirce's theory of signs has been rediscovered by a growing number of artificial intelligence researchers in the context of the symbol grounding problem.〔Pierre Steiner, "C.S. Peirce and Artificial Intelligence: Historical Heritage and (New) Theoretical Stakes," ''SAPERE'', Special Issue on Philosophy and Theory of AI, 5:265-276 (2013).〕

Excerpt source: the free encyclopedia Wikipedia, article "Symbol grounding problem."